    ToolNet: Holistically-Nested Real-Time Segmentation of Robotic Surgical Tools

    Real-time tool segmentation from endoscopic videos is an essential component of many computer-assisted robotic surgical systems and of critical importance in robotic surgical data science. We propose two novel deep learning architectures for the automatic segmentation of non-rigid surgical instruments. Both methods take advantage of automated deep-learning-based multi-scale feature extraction while aiming to maintain accurate segmentation quality at all resolutions. The two proposed methods encode the multi-scale constraint inside the network architecture: the first enforces it by cascaded aggregation of predictions, and the second by means of a holistically-nested architecture in which the loss at each scale is taken into account during optimization. As the proposed methods target real-time semantic labeling, both have a reduced number of parameters. We propose the use of parametric rectified linear units for semantic labeling in these small architectures to increase the regularization ability of the design and maintain segmentation accuracy without overfitting the training sets. We compare the proposed architectures against state-of-the-art fully convolutional networks. We validate our methods on existing benchmark datasets, including ex vivo cases with phantom tissue and different robotic surgical instruments present in the scene. Our results show a statistically significant improvement in Dice Similarity Coefficient over previous instrument segmentation methods. We analyze our design choices and discuss the key drivers for improving accuracy.
    Comment: Paper accepted at IROS 201
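The holistically-nested idea above, where the loss at every scale contributes to the optimization, can be sketched as a deep-supervision loss: a weighted sum of per-scale segmentation losses. The sketch below is illustrative only (soft Dice terms over flat probability lists), not the paper's actual implementation:

```python
def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss for one binary mask, given as flat lists of
    # predicted probabilities and ground-truth labels.
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def holistically_nested_loss(side_outputs, targets, weights=None):
    # Deep supervision: the total loss is a weighted sum of the losses
    # at every side output, so each resolution is optimized directly.
    if weights is None:
        weights = [1.0] * len(side_outputs)
    return sum(w * dice_loss(p, t)
               for w, p, t in zip(weights, side_outputs, targets))
```

A perfect prediction at every scale drives each term, and hence the total, to zero.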

    Deep Sequential Mosaicking of Fetoscopic Videos

    Twin-to-twin transfusion syndrome treatment requires fetoscopic laser photocoagulation of placental vascular anastomoses to regulate blood flow to both fetuses. The limited field-of-view (FoV) and low visual quality during fetoscopy make it challenging to identify all vascular connections. Mosaicking can align multiple overlapping images to generate an image with increased FoV; however, existing techniques perform poorly on fetoscopy due to low visual quality and texture paucity, and fail on longer sequences due to the drift accumulated over time. Deep learning techniques can help overcome these challenges. We therefore present a new generalized Deep Sequential Mosaicking (DSM) framework for fetoscopic videos captured in different settings such as simulation, phantom, and real environments. DSM extends an existing deep image-based homography model to sequential data through controlled data augmentation and outlier rejection methods. Unlike existing methods, DSM can handle visual variations due to specular highlights and reflections across adjacent frames, thereby reducing the accumulated drift. We perform experimental validation and comparison on 5 diverse fetoscopic videos to demonstrate the robustness of our framework.
    Comment: Accepted at MICCAI 201
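The drift mentioned above arises because mosaicking composes pairwise homographies over the whole sequence, so per-frame errors accumulate. A minimal sketch of that composition step, assuming each estimated homography maps points from frame i into frame i+1 (the convention and names are illustrative, not the paper's code):

```python
import numpy as np

def chain_homographies(pairwise):
    # Compose pairwise homographies (frame i -> frame i+1) into
    # cumulative transforms that map every frame into the first frame's
    # coordinate system, as needed to paint a mosaic. Each small error
    # in a pairwise estimate propagates to all later cumulative terms,
    # which is the source of the accumulated drift.
    H = np.eye(3)
    cumulative = [H.copy()]
    for H_i in pairwise:
        H = H @ np.linalg.inv(H_i)      # frame k -> frame 0
        cumulative.append(H / H[2, 2])  # normalize the projective scale
    return cumulative
```

With pure unit translations between frames, frame k maps back to frame 0 by a translation of length k, illustrating how transforms (and their errors) accumulate.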

    Leveraging the Fulcrum Point in Robotic Minimally Invasive Surgery

    In robotic Minimally Invasive Surgery (MIS), the incision point acts as a fulcrum around which the surgical instrument pivots. The fulcrum point has been the topic of much research: mechanisms have been invented to enforce instrument motion about such a fulcrum, while other systems establish a fulcrum through coordinated control of their joints. For laparoscopists, the fulcrum point is an obstacle to be overcome through extensive training; for robots, it is a hurdle requiring careful consideration. In this paper, new estimation methods are proposed that exploit the properties of a fulcrum and turn its presence into an advantage. The paper starts by presenting a novel fulcrum estimation method that is robust against measurement noise and quantization effects. A general fulcrum refinement method is proposed next; it can be used as an add-on to improve alternative estimation approaches. It is then shown how the fulcrum can be leveraged to obtain accurate, high-bandwidth estimates of the instrument tip. The quality of the instrument-tip estimate has a large impact on the performance of advanced guidance schemes such as haptic virtual walls. User tests are included, demonstrating substantially improved guidance thanks to the algorithms presented in this work.
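As an illustration of the kind of estimation problem addressed here, a classic least-squares formulation places the fulcrum at the point closest to a set of observed instrument-shaft axes. This is a generic sketch under that assumption, not the robust method proposed in the paper:

```python
import numpy as np

def estimate_fulcrum(points, directions):
    # Least-squares estimate of the point closest to a set of 3-D lines,
    # each given by a point p_i on the instrument shaft and a direction
    # d_i along it. Since every shaft axis passes (approximately) through
    # the incision, their closest common point estimates the fulcrum.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = np.asarray(d, float)
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the axis
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)
```

If all observed axes pass exactly through one point, the solve recovers that point; with noisy axes it returns the least-squares compromise.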

    A mixed-reality surgical trainer with comprehensive sensing for fetal laser minimally invasive surgery

    PURPOSE: Smaller incisions and reduced surgical trauma have made minimally invasive surgery (MIS) grow in popularity, even though long training is required to master the instrument manipulation constraints. While numerous training systems have been developed in the past, very few of them tackle fetal surgery and, more specifically, the treatment of twin-to-twin transfusion syndrome (TTTS). To address this lack of training resources, this paper presents a novel mixed-reality surgical trainer equipped with comprehensive sensing for TTTS procedures. The proposed trainer combines the benefits of box-trainer technology and virtual reality systems. Face and content validation studies are presented, and a use case highlights the benefits of having embedded sensors. METHODS: Face and content validity of the developed setup were assessed by asking surgeons from the field of fetal MIS to accomplish specific tasks on the trainer. A small use case investigates whether the trainer's sensors are able to distinguish between an easy and a difficult scenario. RESULTS: The trainer was deemed sufficiently realistic and its proposed tasks relevant for practicing the required motor skills. The use case demonstrated that the motion and force sensing capabilities of the trainer were able to analyze surgical skill. CONCLUSION: The developed trainer for fetal laser surgery was validated by surgeons from a specialized center in fetal medicine. Further similar investigations in other centers are of interest, as are quality improvements that will allow the difficulty of the trainer to be increased. The comprehensive sensing appeared capable of objectively assessing skill.

    Robotic Endoscope Control via Autonomous Instrument Tracking

    Many keyhole interventions rely on bi-manual handling of surgical instruments, forcing the main surgeon to rely on a second surgeon to act as a camera assistant. In addition to the burden of excessively involving surgical staff, this may lead to reduced image stability, increased task completion time, and sometimes errors due to the monotony of the task. Robotic endoscope holders, controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling may increase the cognitive load of the (solo) surgeon, which hinders their clinical acceptance. More seamless integration into the surgical workflow would be achieved if robotic endoscope holders collaborated with the operating surgeon via semantically rich instructions that closely resemble those that would otherwise be issued to a human camera assistant, such as "focus on my right-hand instrument". As a proof of concept, this paper presents a novel system that paves the way towards such synergistic interaction between surgeons and robotic endoscope holders. The proposed platform allows the surgeon to perform a bimanual coordination and navigation task while a robotic arm autonomously performs the endoscope positioning. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation and a novel visual servoing approach that ensures smooth and appropriate motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. The clinical relevance of the study is ensured through the use of a laparoscopic exercise, validated by the European Academy of Gynaecological Surgery, which involves bi-manual coordination and navigation. Successful application of our proposed system provides a promising starting point towards broader clinical adoption of robotic endoscope holders.
    Comment: Caspar Gruijthuijsen and Luis C. Garcia-Peraza-Herrera have contributed equally to this work and share first authorship.
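A visual-servoing scheme of the kind described can be illustrated with a simple proportional controller on the tracked tooltip's image-space error; a dead-band keeps the camera still for small errors, which contributes to smooth, non-jittery endoscope motion. The function and its parameters are hypothetical, not the authors' controller:

```python
import numpy as np

def centering_velocity(tooltip_px, image_size, gain=0.5, deadband=20.0):
    # Proportional image-based visual servoing sketch: command a camera
    # velocity proportional to the tooltip's offset from the image
    # centre. Inside the dead-band the endoscope does not move, so small
    # segmentation jitter does not shake the view.
    centre = np.asarray(image_size, float) / 2.0
    error = np.asarray(tooltip_px, float) - centre
    if np.linalg.norm(error) < deadband:
        return np.zeros(2)
    # Pixel-space velocity; a real system would map this to joint space.
    return gain * error
```

A tooltip at the image centre yields zero commanded velocity; an off-centre tooltip yields a velocity proportional to its offset.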